

A Rapid Review of Responsible AI frameworks: How to guide the development of ethical AI

Barletta, Vita Santa, Caivano, Danilo, Gigante, Domenico, Ragone, Azzurra

arXiv.org Artificial Intelligence

In recent years, the rise of Artificial Intelligence (AI), and its pervasiveness in our lives, has sparked a flourishing debate about the ethical principles that should guide its implementation and use in society. Driven by these concerns, we conduct a rapid review of several frameworks providing principles, guidelines, and/or tools to help practitioners in the development and deployment of Responsible AI (RAI) applications. We map each framework w.r.t. the different Software Development Life Cycle (SDLC) phases, finding that most of these frameworks address only the Requirements Elicitation phase, leaving the other phases uncovered. Very few of these frameworks offer supporting tools for practitioners, and those that do are mainly provided by private companies. Our results reveal that there is no "catch-all" framework supporting both technical and non-technical stakeholders in the implementation of real-world projects. Our findings highlight the lack of a comprehensive framework encompassing all RAI principles and all SDLC phases that could be navigated by users with different skill sets and different goals.


Z-Inspection: A holistic and analytic process to assess Ethical AI

#artificialintelligence

To address concerns about the ethical and societal implications of artificial intelligence systems, one possible solution is to have AI systems audited for harm by investigators. We at the Frankfurt Big Data Lab at the Goethe University of Frankfurt, together with a team of international experts, defined a novel holistic and analytic process to assess Ethical AI, called Z-Inspection. Z-Inspection is a general inspection process for Ethical AI which can be applied to a variety of domains such as business, healthcare, the public sector, etc. To the best of our knowledge, Z-Inspection is the first process that combines a holistic and analytic approach to assess Ethical AI in practice. Our assessment takes into account the "Framework for Trustworthy AI" and the seven key requirements that AI systems should meet in order to be deemed trustworthy, defined by the independent High-Level Expert Group on Artificial Intelligence [1], set up by the European Commission, and also confirmed by a recent report of the Organisation for Economic Co-operation and Development (OECD).


Global Big Data Conference

#artificialintelligence

According to the National Venture Capital Association, 1,356 AI-related companies raised $18.457 billion in 2019 in the US. Last summer, Senior Fellow & SVP at Google AI Jeff Dean calculated that around 100 new machine learning papers appeared on arXiv every day by the end of 2018--and that wasn't the peak. Data scientists are beginning to specialize. Engineering roles focus on data and machine learning in production systems. Science roles focus more on analytics and decision support.


Europe sets out to build its own brand of AI

#artificialintelligence

Just look at the reports on AI. When I asked one expert last week how many of these documents are out there, she said at least 120. The von der Leyen Commission's much anticipated White Paper dropped on February 19th. It follows the statement on AI in her own political guidelines, and the previous Commission's Communication on Artificial Intelligence for Europe from last spring. And it's accompanied by the Commission's Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics.


Trustworthy artificial intelligence – is new EU regulation coming for AI?

#artificialintelligence

Barry O'Sullivan from the Insight Research Centre for Data Analytics writes about the EU's plans for regulating AI – and what opportunities it could hold for Ireland. The new president of the European Commission, Ursula von der Leyen, committed to introducing a new European regulation for artificial intelligence (AI) in Europe during her first 100 days in office. While a fully fledged regulation is unlikely in that timeframe, we can expect to see a vision for a new regulatory framework for AI in Europe very soon, possibly this month. What can we expect from such a regulation and what should AI developers and businesses be doing to prepare for it? The EU is positioning itself as a leader in trustworthy, human-centric artificial intelligence.


The EU Should Not Regulate Artificial Intelligence As A Separate Technology

#artificialintelligence

A report from the recent conference on Computers, Privacy and Data Protection suggested that the European Commission is "considering the possibility of legislating for Artificial Intelligence." Karolina Mojzesowicz, Deputy Head, Data Protection Unit at the European Commission, said that the Commission is "assessing whether national and EU frameworks are fit for purpose for the new challenges." The Commission is exploring, for instance, whether to specify "how big a margin of error is acceptable in automated decisions and machine learning." The vehicle for this regulatory effort seems to be the draft Ethics Guidelines developed by a high-level expert group. The comment period on this draft closed on February 1, and a final report is due in March.


AI Stakeholder Consultation - FUTURIUM - European Commission

#artificialintelligence

Welcome to the consultation on the Draft AI Ethics Guidelines of the High-Level Expert Group on Artificial Intelligence (AI HLEG). The AI HLEG was established by the European Commission in June 2018 to support the implementation of its Strategy on Artificial Intelligence and to prepare two deliverables: (1) AI Ethics Guidelines and (2) Policy and Investment Recommendations. The first draft of the Guidelines, published on 18 December, is now open for consultation until 1 February (i.e. a two-week extension from the original deadline of 18 January, in order to allow the opportunity for more feedback). You can comment on or contribute to the individual sections of the draft, as well as provide your general feedback, through the following consultation form. The AI HLEG has also published a definition of artificial intelligence which accompanies its deliverables.


High-Level Expert Group on Artificial Intelligence

#artificialintelligence

The High-Level Expert Group on Artificial Intelligence (AI HLEG) will have as a general objective to support the implementation of the European strategy on Artificial Intelligence. This will include the elaboration of recommendations on future-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges. Moreover, the AI HLEG will serve as the steering group for the European AI Alliance's work, interact with other initiatives, help stimulate a multi-stakeholder dialogue, gather participants' views and reflect them in its analysis and reports. It will advise the Commission on next steps addressing AI-related mid- to long-term challenges and opportunities through recommendations which will feed into the policy development process, the legislative evaluation process and the development of a next-generation digital strategy. In May 2019, the AI HLEG will also put forward policy and investment recommendations on how to strengthen Europe's competitiveness in AI, including guidance for a strategic research agenda on AI and on the establishment of a network of AI excellence centres. On 18 December 2018 the AI HLEG proposed the first draft AI ethics guidelines to the Commission, covering issues such as fairness, safety, transparency, the future of work, democracy and, more broadly, the impact on the application of the Charter of Fundamental Rights, including privacy and personal data protection, dignity, consumer protection and non-discrimination.


OKRA CEO helps shape EU governance of AI Business Weekly Technology News Business news

#artificialintelligence

The CEO of a Cambridge-based Artificial Intelligence startup is helping to fashion pan-European governance and regulation of the rocketing technology segment. Dr Loubna Bouarfa of OKRA Technologies is one of 52 thought leaders on the advisory brains trust – the EU High-Level Expert Group on AI – and spent a week at a Brussels summit. Discussions were held in the European Parliament covering topics from programming bias, data security, error accountability and responsible workflows. As the founder and CEO of OKRA, a St John's Innovation Centre company, Dr Bouarfa represents AI practitioners and the healthcare space in particular. She said: "It is an honour to represent the innovative AI startup ecosystem, balancing an operational mindset with my ethical convictions. In my work at OKRA and beyond, ethical considerations around responsibility, transparency and accountability are always top of mind."


Philosophers Appointed To High-Level Expert Group on Artificial Intelligence - Daily Nous

#artificialintelligence

The European Commission (EC), which proposes and administers European Union (EU) law and policy, has created a new High-Level Expert Group on Artificial Intelligence, the aim of which is to advise on the crafting and implementation of the EU's strategy on artificial intelligence. Among the 52 experts are several people who work in philosophy. You can read more about the group here.
